AAAI AI-Alert for Feb 21, 2023
The big idea: should robots take over fighting crime?
San Francisco's board of supervisors recently voted to let their police deploy robots equipped with lethal explosives, before backtracking several weeks later. In America, the vote sparked a fierce debate on the militarisation of the police, but it raises fundamental questions for us all about the role of robots and AI in fighting crime, how policing decisions are made and, indeed, the very purpose of our criminal justice systems. In the UK, officers operate under the principle of "policing by consent" rather than by force. But according to the 2020 Crime Survey for England and Wales, public confidence in the police has fallen from 62% in 2017 to 55%. One recent poll asked Londoners if the Met was institutionally sexist and racist.
How AI can actually be helpful in disaster response
But one effort from the US Department of Defense does seem to be effective: xView2. Though it's still in its early phases of deployment, this visual computing project has already helped with disaster logistics and on-the-ground rescue missions in Turkey. An open-source project sponsored and developed by the Pentagon's Defense Innovation Unit and Carnegie Mellon University's Software Engineering Institute in 2019, xView2 has collaborated with many research partners, including Microsoft and the University of California, Berkeley. It uses machine-learning algorithms in conjunction with satellite imagery from other providers to identify building and infrastructure damage in the disaster area and categorize its severity much faster than is possible with current methods. Ritwik Gupta, the principal AI scientist at the Defense Innovation Unit and a researcher at Berkeley, tells me this means the program can directly help first responders and recovery experts on the ground quickly get an assessment that can aid in finding survivors and help coordinate reconstruction efforts over time.
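As a rough illustration of the approach described above, the sketch below buckets a building's damage into a severity class by comparing pre- and post-disaster image tiles. The change signal, class labels, and thresholds here are illustrative assumptions; the actual xView2 pipeline uses trained deep segmentation models rather than raw pixel differencing.

```python
# Hypothetical sketch of an xView2-style workflow: compare pre- and
# post-disaster satellite tiles and assign each building footprint a
# damage-severity class. Labels and thresholds are assumed for
# demonstration, not taken from the real xView2 implementation.
import numpy as np

DAMAGE_CLASSES = ["no-damage", "minor", "major", "destroyed"]  # assumed labels

def damage_score(pre_tile: np.ndarray, post_tile: np.ndarray) -> float:
    """Crude change signal: normalized mean absolute pixel difference.
    A real system would use a trained segmentation network instead."""
    diff = np.abs(post_tile.astype(float) - pre_tile.astype(float))
    return float(diff.mean() / 255.0)

def classify_building(pre_tile: np.ndarray, post_tile: np.ndarray) -> str:
    score = damage_score(pre_tile, post_tile)
    # Assumed cutoffs mapping the change signal to severity buckets.
    for label, cutoff in zip(DAMAGE_CLASSES, [0.05, 0.15, 0.35]):
        if score < cutoff:
            return label
    return DAMAGE_CLASSES[-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pre = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    # Simulate post-disaster change by perturbing the "before" tile.
    post = np.clip(pre + rng.integers(0, 120, size=pre.shape), 0, 255).astype(np.uint8)
    print(classify_building(pre, post))
```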
How AI chatbots in search engines will completely change the internet
The progress of artificial intelligence models over the past few years has been faster than almost anyone expected. Some advances have left society scrabbling to adapt. Teachers are struggling to stop students using chatbots to write their essays, artists claim they are losing paid work to image-creating AIs and efforts are under way in some places to replace journalists with large language models. But bigger changes are afoot.
Threats, mistakes and 'Sydney' -- Microsoft's new AI is acting unhinged
Chatbots like Bing have kicked off a major new AI arms race between the biggest tech companies. Though Google, Microsoft, Amazon and Facebook have invested in AI tech for years, it's mostly worked to improve existing products, like search or content-recommendation algorithms. But when the start-up company OpenAI began making public its "generative" AI tools -- including the popular ChatGPT chatbot -- it led competitors to brush away their previous, relatively cautious approaches to the tech.
Tesla's Recall of Full Self-Driving Targets a 'Fundamental' Flaw
After years selling its controversial Full Self-Driving software upgrade for thousands of dollars, Tesla today issued a recall for every one of the nearly 363,000 vehicles using the feature. The move was prompted by a US government agency saying the software had in "rare circumstances" put drivers in danger and could increase the risk of a crash in everyday situations. Recalls are common in the auto industry and mostly target particular parts or road situations. Tesla's latest recall is sweeping, with the National Highway Traffic Safety Administration saying the Full Self-Driving software can break local traffic laws and act in a way the driver doesn't expect in a grab bag of road situations. According to the agency's filing, those include driving through a yellow light on the verge of turning red; not properly stopping at a stop sign; speeding, due to failing to detect a road sign or because the driver has set their car to default to a faster speed; and making unexpected lane changes to move out of turn-only lanes when going straight through an intersection.
Why Chatbots Sometimes Act Weird and Spout Nonsense
The Bing chatbot is powered by a kind of artificial intelligence called a neural network. That may sound like a computerized brain, but the term is misleading. A neural network is just a mathematical system that learns skills by analyzing vast amounts of digital data. As a neural network examines thousands of cat photos, for instance, it can learn to recognize a cat. Most people use neural networks every day. It's the technology that identifies people, pets and other objects in images posted to internet services like Google Photos.
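To make that point concrete, here is a toy sketch of the learning process the article describes: a tiny network whose parameters start out random and are nudged by labeled examples until its outputs match. The task (XOR) and the architecture are arbitrary choices for illustration; image recognizers like those behind Google Photos apply the same idea at vastly larger scale.

```python
# A neural network is just arithmetic whose parameters are tuned by
# examples. This tiny two-layer network learns XOR from four labeled
# points using plain gradient descent.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute predictions from the current parameters.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: cross-entropy gradient, propagated to each parameter.
    grad_p = p - y
    grad_W2, grad_b2 = h.T @ grad_p, grad_p.sum(0)
    grad_h = grad_p @ W2.T * (1 - h**2)
    grad_W1, grad_b1 = X.T @ grad_h, grad_h.sum(0)
    # Nudge every parameter a small step downhill on the error.
    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= 0.1 * grad

print(p.round(2).ravel())  # approaches [0, 1, 1, 0] as the network "learns"
```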
Google Vice President Warns That AI Chatbots Are Hallucinating
Speaking to German newspaper Welt am Sonntag, Google vice president Prabhakar Raghavan warned that users may be delivered complete nonsense by chatbots, despite answers seeming coherent. Google is set to launch its own rival to OpenAI's ChatGPT, a language model that can answer your questions and queries. Named Bard, the chatbot will roll out to the public in the coming weeks, according to Google CEO Sundar Pichai. Ahead of the launch, Google demonstrated the powers of Bard in a promo video. Unfortunately, people noticed that the chatbot, a scaled-down version of its Language Model for Dialogue Applications (LaMDA) that once convinced a Google engineer it was sentient, came up with incorrect statements about the James Webb Space Telescope (JWST).
Rovers Are So Yesterday. It's Time to Send a Snakebot to Space
If the boxy Opportunity rover could elicit years of anthropomorphized love and goodwill, then surely Earthlings will warm to the idea of sending a snake-shaped robot to the moon. This robot--the brainchild of students at Northeastern University--is meant to wiggle across difficult terrain, measure water in the pit of craters, and bite its own tail to become a spinning ouroboros tumbling down the side of a lunar cliff. NASA's Big Idea Challenge poses a new query each year, geared toward an engineering problem the agency needs to solve. In fall 2021, students from universities across the United States set out to design a robot that could survive extreme lunar terrain and send data back to Earth. The winning team, made up of students from Northeastern's Students for the Exploration and Development of Space club, took home the top prize in November and now hopes to turn its winning design into an advanced prototype that could actually be sent to the moon.
Amazon tests robotaxis on California roads with employees as passengers
Amazon is testing a fleet of robotaxis on public roads in California, using employees as passengers, as the tech behemoth moves closer to a commercial service for the general public. The online retailer has been aggressively expanding into self-driving technology and bought the self-driving startup Zoox for $1.3bn in 2020. A test conducted on 11 February saw the robotaxis successfully drive between two Zoox buildings a mile apart at its headquarters in Foster City, California. It was part of the launch of a no-cost employee shuttle service that will also help the company refine its technology. Zoox's robotaxi, built as a fully autonomous vehicle from scratch rather than retrofitting existing cars for self-driving, comes without a steering wheel or pedals and has room for four passengers, with two facing each other.
Deploying a multidisciplinary strategy with embedded responsible AI
The risk landscape of AI is broad and evolving. For instance, ML models, which are often developed using vast, complex, and continuously updated datasets, require a high level of digitization and connectivity in software and engineering pipelines. Yet the eradication of IT silos, both within the enterprise and potentially with external partners, increases the attack surface for cyber criminals and hackers. Cybersecurity and resilience are therefore an essential component of the digital transformation agenda on which AI depends. A second established risk is bias. Because historical social inequities are baked into raw data, they can be codified--and magnified--in automated decisions, leading, for instance, to unfair credit, loan, and insurance decisions.
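As one concrete flavor of the bias checks this passage implies, the hedged sketch below compares approval rates across groups in a batch of automated decisions and flags large gaps. The group names, sample data, and 80% ratio threshold (echoing the informal "four-fifths rule") are assumptions for demonstration, not a complete fairness audit.

```python
# Illustrative pre-deployment check: do a model's approval rates differ
# sharply across groups? All names and thresholds are assumed.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def flag_disparity(decisions, ratio_floor=0.8):
    rates = approval_rates(decisions)
    worst, best = min(rates.values()), max(rates.values())
    # Flag if the least-approved group's rate falls below 80% of the
    # most-approved group's rate (the informal "four-fifths rule").
    return best > 0 and worst / best < ratio_floor, rates

# Synthetic example: group A approved 80% of the time, group B 55%.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)
flagged, rates = flag_disparity(sample)
print(rates, "disparity flagged:", flagged)
```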